Human Misuse Will Make Artificial Intelligence More Dangerous

WIRED

This story is from the WIRED World in 2025, our annual trends briefing. OpenAI CEO Sam Altman expects AGI, or artificial general intelligence--AI that outperforms humans at most tasks--around 2027 or 2028. Elon Musk's prediction is either 2025 or 2026, and he has claimed that he was "losing sleep over the threat of AI danger." As the limitations of current AI become increasingly clear, most AI researchers have come to the view that simply building bigger and more powerful chatbots won't lead to AGI. However, in 2025, AI will still pose a massive risk: not from artificial superintelligence, but from human misuse.


The White House's "AI Bill of Rights" outlines five principles to make artificial intelligence safer, more transparent and less discriminatory

AIHub

Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there is very little policy or regulation governing the development and use of AI systems in the United States. Tech companies have largely been left to regulate themselves in this arena, which has led to decisions and situations that have drawn criticism. Google fired an employee who publicly raised concerns over how a certain type of AI can contribute to environmental and social problems. Other AI companies have developed products used by organizations such as the Los Angeles Police Department, where they have been shown to reinforce existing racially biased policies. There are some government recommendations and guidance regarding AI use.


Top 10 AI Books For Beginners - AI Summary

#artificialintelligence

These ten books on artificial intelligence will be helpful to anyone eager to learn about and begin a career in AI, IoT, and robotics. The author Kevin Warwick, a pioneer in the field, examines what it means to be man or machine and looks at advances in robotics that have blurred the boundaries. In her book, award-winning scholar Kate Crawford explains how AI is a technology of extraction: from the minerals dredged from the earth to the labor of low-wage workers to the data collected from every action and expression. Instead of focusing purely on code and algorithms, Crawford offers us a material and political analysis of what it takes to make artificial intelligence, and how it centralizes power. Another author discusses the near-term benefits we can expect, such as intelligent personal assistants and fast-tracked scientific research, as well as the breakthroughs that will be needed before we can achieve superhuman AI.


How to Make Artificial Intelligence More Democratic

#artificialintelligence

This year, GPT-3, a large language model capable of understanding text, responding to questions and generating new writing examples, has drawn international media attention. The model, released by OpenAI, a California-based nonprofit that builds general-purpose artificial intelligence systems, has an impressive ability to mimic human writing, but just as notable is its massive size. To build it, researchers assembled more than 45 terabytes of text from Common Crawl, Reddit, Wikipedia and other sources, then trained a model with 175 billion parameters (the numerical values a network learns) in a process that occupied hundreds of processing units for thousands of hours. GPT-3 demonstrates a broader trend in artificial intelligence. Deep learning, which has in recent years become the dominant technique for creating new AIs, uses enormous amounts of data and computing power to fuel complex, accurate models.
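For a sense of what 175 billion parameters means in practice, here is a back-of-the-envelope sketch. The two-bytes-per-parameter figure assumes 16-bit floating-point weights, a common choice for large models; it counts only the stored weights, not the much larger memory needed during training.

```python
# Rough memory footprint of GPT-3-scale weights,
# assuming 2 bytes per parameter (16-bit floats).
n_params = 175e9
bytes_per_param = 2
gigabytes = n_params * bytes_per_param / 1e9
print(f"{gigabytes:.0f} GB just to hold the weights")  # prints "350 GB ..."
```

At roughly 350 GB, the weights alone exceed the memory of any single accelerator card, which is one reason training and serving such models requires hundreds of processors working together.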


How to Make Artificial Intelligence More Meta

#artificialintelligence

In one of computer science's more meta moments, professor Chelsea Finn created an AI algorithm to evaluate the coding projects of her students. The AI model reads and analyzes code, spots flaws, and gives feedback to the students. Computers learning about learning--it's so meta that Finn calls it "meta learning." Finn says the field should forgo training AI for highly specific tasks in favor of training it to look at a diversity of problems to divine the common structure among those problems. The result is AI able to see a problem it has not encountered before and call upon all that previous experience to solve it.

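Finn's actual systems aren't described here, but the core idea of meta-learning — train across a distribution of tasks so that adapting to a new one is fast — can be sketched in a few lines. This toy first-order loop (in the spirit of MAML; all task details are made up) fits one-parameter regression tasks y = a·x with different hidden slopes:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each "task" is fitting y = a*x for a different hidden slope a.
    a = rng.uniform(-2, 2)
    x = rng.uniform(-1, 1, 20)
    return x, a * x

def loss_grad(w, x, y):
    # Squared error for the one-parameter model y ~ w*x, and its gradient.
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

w = 0.0                       # shared meta-parameter across all tasks
inner_lr, meta_lr = 0.1, 0.01
for _ in range(500):
    x, y = sample_task()
    _, g = loss_grad(w, x, y)
    w_adapted = w - inner_lr * g          # one task-specific adaptation step
    _, g_meta = loss_grad(w_adapted, x, y)
    w -= meta_lr * g_meta                 # first-order meta-update
```

The meta-update optimizes not the loss at `w` itself, but the loss after one adaptation step, so the learned starting point is one from which a single gradient step already helps on an unseen task.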

How Can Artificial Intelligence Benefit Humans

#artificialintelligence

You are living in an era in which technology is rapidly evolving from reactive to proactive. You are witnessing computers slowly taking up space in every aspect of your life, with some form of electronic intelligence around you. Even the song you hear on your favourite music app is chosen by algorithms that use artificial intelligence. What is this artificial intelligence, and in what other ways does this technology help?
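The "song chosen by algorithms" example can be made concrete. Real music apps use far more sophisticated systems, but a minimal collaborative-filtering sketch (all listeners, songs, and play counts below are made up) captures the basic idea: find the listener most similar to you and suggest what they play that you haven't heard.

```python
import numpy as np

# Toy play-count matrix: rows are listeners, columns are songs.
plays = np.array([
    [5, 0, 3, 0],   # listener A
    [4, 1, 3, 0],   # listener B (tastes similar to A)
    [0, 5, 0, 4],   # listener C
], dtype=float)
songs = ["song1", "song2", "song3", "song4"]

def cosine(u, v):
    # Cosine similarity: 1.0 for identical tastes, 0.0 for no overlap.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

target = 0  # recommend for listener A
sims = [cosine(plays[target], plays[j]) if j != target else -1.0
        for j in range(len(plays))]
neighbor = int(np.argmax(sims))
# Suggest songs the nearest neighbor plays that the target hasn't heard.
suggestions = [s for s, a, b in zip(songs, plays[target], plays[neighbor])
               if a == 0 and b > 0]
print(suggestions)  # -> ['song2']
```

Listener B is A's nearest neighbor, so A gets recommended the one song B plays that A has never played.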


The need to make Artificial Intelligence more accessible in India

#artificialintelligence

Does your head spin a bit while reading the sentence above? Did you take one look at this jumble of tech terms and declare in your mind that it is "too geeky" for you? It could be just a matter of communication: instead of throwing tech terms at people, if you make regular people understand what something like AI can do to make their lives better, they may accept it more readily. And that may be the most important step in India's tryst with artificial intelligence, as per the overwhelming sentiment at Nasscom's Experience AI summit Thursday afternoon. "AI is going to be as ubiquitous as electricity, it will become such an integral part of everything," exclaimed Arundhati Bhattacharya, CEO & Chairperson of Salesforce India and the former head of State Bank of India.



How to Make Artificial Intelligence Less Biased

#artificialintelligence

How could software designed to take the bias out of decision making, to be as objective as possible, produce these kinds of outcomes? After all, the purpose of artificial intelligence is to take millions of pieces of data and from them make predictions that are as error-free as possible. But as AI has become more pervasive--as companies and government agencies use AI to decide who gets loans, who needs more health care, how to deploy police officers, and more--investigators have discovered that focusing only on making the final predictions as error-free as possible can mean that its errors aren't always distributed equally. Instead, its predictions can often reflect and exaggerate the effects of past discrimination and prejudice. In other words, the more AI focused on getting only the big picture right, the more it was prone to being less accurate for certain segments of the population--in particular women and minorities. And the impact of this bias can be devastating on swaths of the population--for instance, denying loans to creditworthy women much more frequently than denying loans to creditworthy men.


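The point about unevenly distributed errors is easy to demonstrate. In this minimal sketch (all labels and predictions are made-up numbers for illustration), an overall error rate looks acceptable while one group bears all of the model's mistakes:

```python
# y_true: 1 = applicant actually repaid; y_pred: 1 = model approved the loan.
# "group" marks a demographic split; the numbers are invented for illustration.
y_true = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
group  = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

def error_rate(g):
    # Fraction of mistaken predictions within one group.
    pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, group) if gg == g]
    return sum(t != p for t, p in pairs) / len(pairs)

overall = sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)
print(overall, error_rate("m"), error_rate("f"))  # -> 0.2 0.0 0.4
```

The aggregate error rate of 20% hides the fact that every error falls on group "f" (creditworthy applicants wrongly denied), which is why auditors increasingly report error rates per group rather than a single overall number.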